Playing Text-Adventure Games with Graph-Based Deep Reinforcement Learning
Text-based adventure games provide a platform on which to explore
reinforcement learning in the context of a combinatorial action space, such as
natural language. We present a deep reinforcement learning architecture that
represents the game state as a knowledge graph which is learned during
exploration. This graph is used to prune the action space, enabling more
efficient exploration. The question of which action to take can be reduced to a
question-answering task, a form of transfer learning that pre-trains certain
parts of our architecture. In experiments using the TextWorld framework, we
show that our proposed technique can learn a control policy faster than
baseline alternatives. We have also open-sourced our code at
https://github.com/rajammanabrolu/KG-DQN.
Comment: Proceedings of NAACL-HLT 2019
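The abstract's core idea — using a learned knowledge graph to prune a combinatorial, template-based action space — can be illustrated with a toy sketch. This is not the authors' KG-DQN code; the triples, templates, and object names below are invented for illustration:

```python
# Toy sketch (not the authors' implementation): pruning a template-based
# action space using a knowledge graph of (subject, relation, object)
# triples built up during exploration.

KNOWLEDGE_GRAPH = {
    ("you", "have", "key"),
    ("door", "is", "locked"),
    ("chest", "in", "room"),
}

TEMPLATES = ["open {}", "unlock {} with {}", "take {}"]
CANDIDATE_OBJECTS = ["door", "chest", "key", "dragon", "sword"]

def graph_entities(graph):
    """Collect every entity mentioned in any triple of the graph."""
    return {entity for triple in graph for entity in triple}

def prune_actions(graph, templates, objects):
    """Keep only actions whose objects the agent has actually observed."""
    known = graph_entities(graph)
    actions = []
    for template in templates:
        slots = template.count("{}")
        if slots == 1:
            actions += [template.format(o) for o in objects if o in known]
        elif slots == 2:
            actions += [
                template.format(a, b)
                for a in objects if a in known
                for b in objects if b in known and b != a
            ]
    return actions

pruned = prune_actions(KNOWLEDGE_GRAPH, TEMPLATES, CANDIDATE_OBJECTS)
# Actions over unobserved entities ("open dragon") are never generated,
# so the agent explores a far smaller action set each step.
```

The pruned set grows with the graph, so early exploration is cheap while later, knowledge-rich states still admit richer actions.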
Language Learning in Interactive Environments
Natural language communication has long been considered a defining characteristic of human intelligence. I am motivated by the question of how learning agents can understand and generate contextually relevant natural language in service of achieving a goal. In pursuit of this objective, I have been studying Interactive Narratives, or text-adventures: simulations in which an agent interacts with the world purely through natural language—“seeing” and “acting upon” the world using textual descriptions and commands. These games are usually structured as puzzles or quests in which a player must complete a sequence of actions to succeed. My work studies two closely related aspects of Interactive Narratives: operating in these environments and creating them, in addition to their intersection—each presenting its own set of unique challenges.
Operating in these environments presents three challenges: (1) Knowledge representation—an agent must maintain a persistent memory of what it has learned through its experiences with a partially observable world; (2) Commonsense reasoning to endow the agent with priors on how to interact with the world around it; and (3) Scaling to effectively explore sparse-reward, combinatorially-sized natural language state-action spaces. On the other hand, creating these environments can be split into two complementary considerations: (1) World generation, or the problem of creating a world that defines the limits of the actions an agent can perform; and (2) Quest generation, i.e. defining actionable objectives grounded in a given world. I will present my work thus far—showcasing how structured, interpretable data representations in the form of knowledge graphs aid in each of these tasks—in addition to proposing how exactly these two aspects of Interactive Narratives can be combined to improve language learning and generalization across this suite of challenges.
Ph.D. thesis
Inherently Explainable Reinforcement Learning in Natural Language
We focus on the task of creating a reinforcement learning agent that is
inherently explainable -- with the ability to produce immediate local
explanations by thinking out loud while performing a task and analyzing entire
trajectories post-hoc to produce causal explanations. This Hierarchically
Explainable Reinforcement Learning agent (HEX-RL) operates in Interactive
Fictions, text-based game environments in which an agent perceives and acts
upon the world using textual natural language. These games are usually
structured as puzzles or quests with long-term dependencies in which an agent
must complete a sequence of actions to succeed -- providing ideal environments
in which to test an agent's ability to explain its actions. Our agent is
designed to treat explainability as a first-class citizen, using an extracted
symbolic knowledge graph-based state representation coupled with a Hierarchical
Graph Attention mechanism that points to the facts in the internal graph
representation that most influenced the choice of actions. Experiments show
that this agent provides significantly improved explanations over strong
baselines, as rated by human participants generally unfamiliar with the
environment, while also matching state-of-the-art task performance.
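The mechanism the abstract describes — attention over knowledge-graph facts that points to the facts most influencing an action — can be sketched in miniature. This is an assumption-laden toy, not HEX-RL's Hierarchical Graph Attention; the embeddings and facts are invented, and a single dot-product attention layer stands in for the full hierarchy:

```python
# Minimal sketch (not HEX-RL itself): score knowledge-graph facts with
# dot-product attention against an action query, then report the
# highest-weighted fact as a local "why this action" explanation.
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def explain(query, facts, fact_vecs):
    """Return (weight, fact) pairs ranked by attention weight."""
    scores = [sum(q * v for q, v in zip(query, vec)) for vec in fact_vecs]
    weights = softmax(scores)
    return sorted(zip(weights, facts), reverse=True)

# Toy 2-d embeddings: the query for an "unlock door" action aligns with
# the fact that the agent holds the key.
facts = [("you", "have", "key"), ("troll", "in", "cellar")]
fact_vecs = [[1.0, 0.0], [0.0, 1.0]]
query = [0.9, 0.1]

top_weight, top_fact = explain(query, facts, fact_vecs)[0]
# The agent would cite ("you", "have", "key") as the most influential fact.
```

Because the weights are computed over interpretable symbolic triples rather than opaque hidden units, the ranked list itself doubles as the explanation shown to users.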
Interactive Fiction Games: A Colossal Adventure
A hallmark of human intelligence is the ability to understand and communicate
with language. Interactive Fiction games are fully text-based simulation
environments where a player issues text commands to effect change in the
environment and progress through the story. We argue that IF games are an
excellent testbed for studying language-based autonomous agents. In particular,
IF games combine challenges of combinatorial action spaces, language
understanding, and commonsense reasoning. To facilitate rapid development of
language-based agents, we introduce Jericho, a learning environment for
man-made IF games and conduct a comprehensive study of text-agents across a
rich set of games, highlighting directions in which agents can improve.
Automated Storytelling via Causal, Commonsense Plot Ordering
Automated story plot generation is the task of generating a coherent sequence
of plot events. Causal relations between plot events are believed to increase
the perception of story and plot coherence. In this work, we introduce the
concept of soft causal relations as causal relations inferred from commonsense
reasoning. We demonstrate C2PO, an approach to narrative generation that
operationalizes this concept through Causal, Commonsense Plot Ordering. Using
human-participant protocols, we evaluate our system against baseline systems
with different commonsense reasoning and inductive biases to
determine the role of soft causal relations in perceived story quality. Through
these studies we also probe the interplay of how changes in commonsense norms
across storytelling genres affect perceptions of story quality.Comment: AAAI-21 Camera Ready Versio